Global Mixup: Eliminating Ambiguity with Clustering
Authors
Abstract
Data augmentation with Mixup has proven an effective method to regularize current deep neural networks. Mixup generates virtual samples and corresponding labels simultaneously by linear interpolation. However, this one-stage generation paradigm and its use of linear interpolation have two defects: (1) the label of a generated sample is simply combined from the labels of the original sample pair without reasonable judgment, resulting in ambiguous labels; (2) linear combination significantly restricts the sampling space for generating samples. To address these issues, we propose a novel method, Global Mixup, based on global clustering relationships. Specifically, we transform the previous one-stage process into a two-stage process by decoupling sample generation from labeling. For the labels of the generated samples, relabeling is performed by calculating global clustering relationships. Furthermore, the generated samples are no longer restricted to linear relationships, which allows us to generate more reliable virtual samples in a larger sampling space. Extensive experiments with CNN, LSTM, and BERT on five tasks show that Global Mixup outperforms previous baselines. Further experiments also demonstrate its advantage in low-resource scenarios.
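The two-stage idea described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `mixup` is the standard one-stage linear interpolation, and `cluster_relabel` is a hypothetical second stage that reassigns the virtual sample's label from its distances to per-class cluster centroids (the specific softmax-over-negative-distance rule is an assumption for illustration).

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Standard one-stage Mixup: the same coefficient lambda ~ Beta(alpha, alpha)
    linearly interpolates both the inputs and the one-hot labels."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y

def cluster_relabel(x_virtual, centroids, temperature=1.0):
    """Hypothetical second-stage relabeling: instead of inheriting the
    interpolated label, score the virtual sample against per-class cluster
    centroids and normalize (softmax over negative Euclidean distances)."""
    d = np.linalg.norm(centroids - x_virtual, axis=1)  # distance to each class centroid
    logits = -d / temperature
    e = np.exp(logits - logits.max())                  # stable softmax
    return e / e.sum()
```

A virtual sample lying close to one class's centroid thus receives a label concentrated on that class, rather than the possibly ambiguous linear mix of its parents' labels.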
Similar Papers
Eliminating the scattering ambiguity in multifocal, multimodal, multiphoton imaging systems.
In this work we present how to entirely remove the scattering ambiguity present in existing multiphoton multifocal systems. This is achieved through the development and implementation of single-element detection systems that incorporate high-speed photon-counting electronics. These systems can be used to image entire volumes in the time it takes to perform a single transverse scan (four depths ...
Improving Document Clustering by Eliminating Unnatural Language
Technical documents contain a fair amount of unnatural language, such as tables, formulas, pseudo-codes, etc. Unnatural language can be an important factor of confusing existing NLP tools. This paper presents an effective method of distinguishing unnatural language from natural language, and evaluates the impact of unnatural language detection on NLP tasks such as document clustering. We view t...
mixup: Beyond Empirical Risk Minimization
Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple lin...
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2023
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v37i11.26616